    Optimization under uncertainty and risk: Quadratic and copositive approaches

    Robust optimization and stochastic optimization are the two main paradigms for dealing with the uncertainty inherent in almost all real-world optimization problems. The core principle of robust optimization is the introduction of parameterized families of constraints. Sometimes, these complicated semi-infinite constraints can be reduced to finitely many convex constraints, so that the resulting optimization problem can be solved using standard procedures. Hence, the flexibility of robust optimization is limited by certain convexity requirements on the various objects involved. However, a recent strand of literature has sought to expand the applicability of robust optimization by lifting variables to a properly chosen matrix space. Doing so makes it possible to handle situations in which the convexity requirements are not met in the original formulation but can be recovered after the lifting. In the domain of (possibly nonconvex) quadratic optimization, the principles of copositive optimization act as a bridge leading to the recovery of the desired convex structures. Copositive optimization has established itself as a powerful paradigm for tackling a wide range of quadratically constrained quadratic optimization problems: these are reformulated as linear conic optimization problems with a linear objective and linear constraints, plus constraints forcing membership in certain matrix cones, which can be thought of as generalizations of the positive semidefinite cone. These reformulations enable the application of powerful optimization techniques, most notably convex duality, to problems which, in their original form, are highly nonconvex. In this text we offer readers an introduction and tutorial on these principles of copositive optimization, and provide a review of, and outlook on, the literature that applies them to optimization problems involving uncertainty.
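
    As a concrete illustration of the lifting principle sketched above (this is standard background, namely the well-known completely positive reformulation of the standard quadratic program, not a formulation taken from the abstract itself):

```latex
% Standard quadratic program (StQP) over the simplex \Delta and its
% exact completely positive reformulation: lifting x to X = xx^T gives
\begin{align*}
  \min_{x \in \Delta} x^\top Q x
  \;=\;
  \min_{X}\bigl\{\, \langle Q, X\rangle \;:\; \langle E, X\rangle = 1,\;
  X \in \mathcal{CP} \,\bigr\},
\end{align*}
% where E is the all-ones matrix and
% \mathcal{CP} = \mathrm{conv}\{ xx^\top : x \ge 0 \} is the completely
% positive cone; the conic dual involves the copositive cone, so convex
% duality becomes available even though the StQP itself is nonconvex.
```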

    Hessian barrier algorithms for linearly constrained optimization problems

    In this paper, we propose an interior-point method for linearly constrained (possibly nonconvex) optimization problems. The method, which we call the Hessian barrier algorithm (HBA), combines a forward Euler discretization of Hessian Riemannian gradient flows with an Armijo backtracking step-size policy. In this way, HBA can be seen as an alternative to mirror descent (MD), and contains as special cases the affine scaling algorithm, regularized Newton processes, and several other iterative solution methods. Our main result is that, modulo a non-degeneracy condition, the algorithm converges to the problem's set of critical points; hence, in the convex case, the algorithm converges globally to the problem's minimum set. In the case of linearly constrained quadratic programs (not necessarily convex), we also show that the method's convergence rate is $\mathcal{O}(1/k^\rho)$ for some $\rho \in (0,1]$ that depends only on the choice of kernel function (i.e., not on the problem's primitives). These theoretical results are validated by numerical experiments on standard nonconvex test functions and large-scale traffic assignment problems.
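
    To make the update rule concrete, the following is a minimal sketch (our own illustration under stated assumptions, not the paper's code) of one HBA-style step for the special case of nonnegativity constraints with the entropic kernel h(x) = sum_i x_i log x_i, whose Hessian is diag(1/x); the function names and test objective are hypothetical:

```python
import numpy as np

def hba_step_entropy(f, grad, x, alpha0=1.0, beta=0.5, c=1e-4):
    """One Hessian-barrier step on the open positive orthant.

    With the entropic kernel h(x) = sum_i x_i log x_i the kernel
    Hessian is diag(1/x), so the forward Euler direction of the
    Hessian Riemannian gradient flow is d = -x * grad(x) (elementwise).
    Armijo backtracking shrinks the step until sufficient decrease
    holds and the trial point stays strictly positive.
    """
    g = grad(x)
    d = -x * g                                # scaled (barrier) direction
    if g @ d == 0.0:                          # already critical
        return x
    alpha = alpha0
    while True:
        x_new = x + alpha * d
        if np.all(x_new > 0) and f(x_new) <= f(x) + c * alpha * (g @ d):
            return x_new
        alpha *= beta                         # backtrack

# toy usage: a convex quadratic, minimized over x > 0
Q = np.array([[2.0, -1.0], [-1.0, 1.0]])
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x
x = np.array([1.0, 1.0])
for _ in range(200):
    x = hba_step_entropy(f, grad, x)
```

    The affine scaling special case mentioned in the abstract corresponds, in this sketch, to the log-barrier kernel whose Hessian is diag(1/x^2), giving the direction -x^2 * grad(x) instead.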

    Active set complexity of the Away-step Frank-Wolfe Algorithm

    In this paper, we study active set identification results for the away-step Frank-Wolfe algorithm in different settings. We first prove a local identification property that we apply, in combination with a convergence hypothesis, to obtain an active set identification result. We then prove, in the nonconvex case, a novel $O(1/\sqrt{k})$ convergence rate result and active set identification for different stepsizes (under suitable assumptions on the set of stationary points). By exploiting those results, we also give explicit active set complexity bounds for both strongly convex and nonconvex objectives. While we initially consider the probability simplex as the feasible set, in the appendix we show how to adapt some of our results to generic polytopes.
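
    Since the probability simplex is the reference feasible set here, a short illustrative sketch (ours, not the authors') of one away-step Frank-Wolfe iteration in that setting may help; over the simplex the vertices are the unit vectors, so the active set is just the support of the iterate:

```python
import numpy as np

def afw_step(grad, x, k, tol=1e-12):
    """One away-step Frank-Wolfe iteration on the probability simplex.

    The FW vertex is argmin_i g_i; the away vertex is the support
    coordinate with the largest g_i. Uses the classic 2/(k+2) step,
    capped at the largest feasible away step; coordinates driven to
    zero drop out of the active set, which is what identification
    results track.
    """
    g = grad(x)
    s = int(np.argmin(g))                     # Frank-Wolfe vertex e_s
    supp = np.flatnonzero(x > tol)            # active set = supp(x)
    a = supp[np.argmax(g[supp])]              # away vertex e_a

    if g @ x - g[s] >= g[a] - g @ x:          # FW direction is steeper
        d = -x.copy(); d[s] += 1.0            # d = e_s - x
        gamma_max = 1.0
    else:                                     # away direction
        d = x.copy(); d[a] -= 1.0             # d = x - e_a
        gamma_max = x[a] / (1.0 - x[a])       # keep x_a nonnegative
    gamma = min(2.0 / (k + 2.0), gamma_max)
    return x + gamma * d
```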

    Copositivity and constrained fractional quadratic programs

    We provide Completely Positive and Copositive Optimization formulations for the Constrained Fractional Quadratic Problem (CFQP) and the Standard Fractional Quadratic Problem (StFQP). Based on these formulations, Semidefinite Programming (SDP) relaxations are derived for finding good lower bounds for these fractional programs, which can be used in a global optimization branch-and-bound approach. Applications of the CFQP and StFQP, related to the correction of infeasible linear systems and to eigenvalue complementarity problems, are also discussed.
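
    As general background on how such ratio objectives connect to quadratic (and hence copositive) machinery, recall Dinkelbach's classical parametric reduction; this is a textbook device, not the completely positive formulation developed in the paper itself:

```latex
% Dinkelbach's reduction: for g(x) > 0 on the feasible set F,
%   min_{x in F} f(x)/g(x)
% has optimal value \lambda^* exactly when the parametric value vanishes:
\begin{align*}
  \varphi(\lambda) \;=\; \min_{x \in F} \bigl( f(x) - \lambda\, g(x) \bigr),
  \qquad
  \varphi(\lambda^\ast) \;=\; 0 .
\end{align*}
% For quadratic f and g, each parametric subproblem is a (possibly
% nonconvex) QP, precisely the class that completely positive
% formulations and their SDP relaxations can bound from below.
```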